Clustering and classification
The data I will be using in the following assignment is called Boston and it describes housing values in the suburbs of Boston. The data has 14 variables, which are the following:
crim = per capita crime rate by town
zn = proportion of residential land zoned for lots over 25,000 sq.ft.
indus = proportion of non-retail business acres per town
chas = Charles River dummy variable (= 1 if tract bounds river, 0 otherwise)
nox = nitrogen oxides concentration
rm = average number of rooms per dwelling
age = proportion of owner-occupied units built prior to 1940
dis = weighted mean of distances to five Boston employment centres
rad = index of accessibility to radial highways
tax = full-value property-tax rate per $10,000
ptratio = pupil-teacher ratio by town
black = 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
lstat = lower status of the population (percent)
medv = median value of owner-occupied homes in $1000s
Below you can see the structure and dimensions of the data.
if (!require("tidyverse")) {
install.packages("tidyverse", repos="http://cran.rstudio.com/")
library("tidyverse")
}
## Loading required package: tidyverse
## Loading tidyverse: ggplot2
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Loading tidyverse: dplyr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag(): dplyr, stats
if (!require("corrplot")) {
install.packages("corrplot", repos="http://cran.rstudio.com/")
library("corrplot")
}
## Loading required package: corrplot
if (!require("ggplot2")) {
install.packages("ggplot2", repos="http://cran.rstudio.com/")
library("ggplot2")
}
if (!require("MASS")) {
install.packages("MASS", repos="http://cran.rstudio.com/")
library("MASS")
}
## Loading required package: MASS
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
if (!require("dplyr")) {
install.packages("dplyr", repos="http://cran.rstudio.com/")
library("dplyr")
}
if (!require("GGally")) {
install.packages("GGally", repos="http://cran.rstudio.com/")
library("GGally")
}
## Loading required package: GGally
##
## Attaching package: 'GGally'
## The following object is masked from 'package:dplyr':
##
## nasa
library(psych)
##
## Attaching package: 'psych'
## The following objects are masked from 'package:ggplot2':
##
## %+%, alpha
library(tidyr)
describe(Boston)
## vars n mean sd median trimmed mad min max range
## crim 1 506 3.61 8.60 0.26 1.68 0.33 0.01 88.98 88.97
## zn 2 506 11.36 23.32 0.00 5.08 0.00 0.00 100.00 100.00
## indus 3 506 11.14 6.86 9.69 10.93 9.37 0.46 27.74 27.28
## chas 4 506 0.07 0.25 0.00 0.00 0.00 0.00 1.00 1.00
## nox 5 506 0.55 0.12 0.54 0.55 0.13 0.38 0.87 0.49
## rm 6 506 6.28 0.70 6.21 6.25 0.51 3.56 8.78 5.22
## age 7 506 68.57 28.15 77.50 71.20 28.98 2.90 100.00 97.10
## dis 8 506 3.80 2.11 3.21 3.54 1.91 1.13 12.13 11.00
## rad 9 506 9.55 8.71 5.00 8.73 2.97 1.00 24.00 23.00
## tax 10 506 408.24 168.54 330.00 400.04 108.23 187.00 711.00 524.00
## ptratio 11 506 18.46 2.16 19.05 18.66 1.70 12.60 22.00 9.40
## black 12 506 356.67 91.29 391.44 383.17 8.09 0.32 396.90 396.58
## lstat 13 506 12.65 7.14 11.36 11.90 7.11 1.73 37.97 36.24
## medv 14 506 22.53 9.20 21.20 21.56 5.93 5.00 50.00 45.00
## skew kurtosis se
## crim 5.19 36.60 0.38
## zn 2.21 3.95 1.04
## indus 0.29 -1.24 0.30
## chas 3.39 9.48 0.01
## nox 0.72 -0.09 0.01
## rm 0.40 1.84 0.03
## age -0.60 -0.98 1.25
## dis 1.01 0.46 0.09
## rad 1.00 -0.88 0.39
## tax 0.67 -1.15 7.49
## ptratio -0.80 -0.30 0.10
## black -2.87 7.10 4.06
## lstat 0.90 0.46 0.32
## medv 1.10 1.45 0.41
dim(Boston)
## [1] 506 14
Below I scaled the Boston dataset in order to standardize the variables. As you can see from the summary of the scaled data, every variable now has mean zero and the values are on a comparable scale. I then created a categorical crime-rate variable from the scaled crime rate, using its quantiles as the break points.
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
scaled_crim <- boston_scaled$crim
summary(scaled_crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419400 -0.410600 -0.390300 0.000000 0.007389 9.924000
bins <- quantile(scaled_crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
crime <- cut(scaled_crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
I split the data into train and test sets, which allows me to check how well the model actually works: the train set is used for fitting the model and the test set for evaluating predictions on new data. Below you can see the LDA fit and plot, with the crime-rate category as the target variable and all the other variables of the scaled Boston dataset as predictors.
n <- nrow(boston_scaled)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2698020 0.2277228 0.2475248 0.2549505
##
## Group means:
## zn indus chas nox rm
## low 1.0466992 -0.9151225 -0.12784833 -0.8864031 0.47712638
## med_low -0.1062395 -0.3221942 0.02723291 -0.6044648 -0.09511587
## med_high -0.3834775 0.1560446 0.20012296 0.4149629 0.00940144
## high -0.4872402 1.0170891 -0.08120770 1.0733120 -0.41849556
## age dis rad tax ptratio
## low -0.8989117 0.9197636 -0.6868521 -0.7447710 -0.47399613
## med_low -0.4444889 0.4281633 -0.5287261 -0.4771764 -0.08790808
## med_high 0.4593471 -0.3943576 -0.4317555 -0.3309488 -0.29448020
## high 0.8082843 -0.8636797 1.6384176 1.5142626 0.78111358
## black lstat medv
## low 0.37802809 -0.77966705 0.56960901
## med_low 0.34419398 -0.17013032 0.02444273
## med_high 0.09023912 0.04844332 0.12408185
## high -0.83061545 0.93667846 -0.72815138
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10648347 0.66496771 -1.14400053
## indus -0.01139250 -0.14103452 0.28274083
## chas -0.02073358 -0.03555109 0.12969595
## nox 0.40989556 -0.64130802 -1.25951061
## rm 0.04148281 0.03919500 -0.04601693
## age 0.25255264 -0.43234764 -0.36842720
## dis -0.11119785 -0.15823938 0.32449567
## rad 3.26930575 0.96630067 -0.02781423
## tax 0.09721771 -0.04174550 0.66504185
## ptratio 0.14559695 0.03875497 -0.42944710
## black -0.10842586 0.01482670 0.16942284
## lstat 0.16927251 -0.12792303 0.48618607
## medv 0.03425574 -0.31664997 -0.14998749
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9485 0.0367 0.0147
# Helper that draws the coefficients of the linear discriminants as arrows on the LDA plot
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)

Below you can see a cross-tabulation of the LDA predictions against the correct classes in the test data. The model does not predict the crime-rate categories of the new data particularly well: the high category is classified correctly for every test observation, but the lower categories are frequently confused with their neighbouring classes.
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 7 10 1 0
## med_low 6 15 13 0
## med_high 0 7 17 2
## high 0 0 0 24
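The overall prediction accuracy can be computed from the same objects; a minimal sketch (the exact value depends on the random train/test split):
# share of test observations whose crime category was predicted correctly
mean(correct_classes == lda.pred$class)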
I reloaded and standardized the original Boston dataset, calculated the Euclidean distances between the observations (below) and ran the k-means algorithm on the data. In my view the optimal number of clusters is seven: with seven clusters the groups are still sensible for this dataset (a sketch of how this choice could be checked with the total within-cluster sum of squares is shown after the pairs plot below).
data('Boston')
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4620 4.8240 4.9110 6.1860 14.4000
km <- kmeans(dist_eu, centers = 7)
pairs(boston_scaled, col = km$cluster)
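The choice of seven clusters is a judgment call. A common way to check it is to compute the total within-cluster sum of squares (WCSS) for a range of cluster counts and look for the point where the curve drops sharply; a minimal sketch, here clustering the scaled data directly rather than the distance matrix, and with an illustrative random seed:
set.seed(123)  # illustrative seed so that the k-means runs are reproducible
k_max <- 10
# total within-cluster sum of squares for k = 1..k_max clusters
twcss <- sapply(1:k_max, function(k) kmeans(boston_scaled, centers = k)$tot.withinss)
# plot WCSS against the number of clusters; a sharp bend suggests a suitable k
plot(1:k_max, twcss, type = "b", xlab = "number of clusters", ylab = "total WCSS")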

Dimensionality reduction techniques
Altogether the human data has 155 observations and 9 variables, which are the following:
Country = country where the data has been collected
lifeExp = Life expectancy at birth
GNI = Gross National Income per capita
educationExp = Expected years of schooling
mortality = Maternal mortality ratio
repr.parliament = Percentage of female representatives in parliament
eduRatio = the ratio of female to male populations with secondary education in each country
labourRatio = the ratio of labour force participation of females and males in each country
birthRate = Adolescent birth rate
human <- read.csv(file = "/Applications/IODS-project/data/human1.csv", sep =",", header =TRUE)
human$X <- NULL
dim(human)
## [1] 155 9
describe(human)
## vars n mean sd median trimmed mad min max
## Country* 1 155 78.00 44.89 78.00 78.00 57.82 1.00 155.00
## lifeExp 2 155 71.65 8.33 74.20 72.40 7.56 49.00 83.50
## GNI* 3 155 78.00 44.89 78.00 78.00 57.82 1.00 155.00
## educationExp 4 155 13.18 2.84 13.50 13.24 2.97 5.40 20.20
## birthRate 5 155 47.16 41.11 33.60 41.62 35.73 0.60 204.80
## mortality 6 155 149.08 211.79 49.00 104.70 63.75 1.00 1100.00
## repr.parliament 7 155 20.91 11.49 19.30 20.32 11.42 0.00 57.50
## eduRatio 8 155 0.85 0.24 0.94 0.87 0.12 0.17 1.50
## labourRatio 9 155 0.71 0.20 0.75 0.73 0.17 0.19 1.04
## range skew kurtosis se
## Country* 154.00 0.00 -1.22 3.61
## lifeExp 34.50 -0.76 -0.15 0.67
## GNI* 154.00 0.00 -1.22 3.61
## educationExp 14.80 -0.20 -0.34 0.23
## birthRate 204.20 1.13 0.89 3.30
## mortality 1099.00 2.03 4.16 17.01
## repr.parliament 57.50 0.55 -0.10 0.92
## eduRatio 1.33 -0.76 0.55 0.02
## labourRatio 0.85 -0.87 0.05 0.02
Below you can see the summaries of the variables in the “human” data, together with a graphical overview. According to the pairs plot, there are clear correlations between some of the variables, e.g. a positive correlation between life expectancy and expected years of schooling, and a negative correlation between expected years of schooling and the adolescent birth rate.
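The code chunk that produced the summary and the pairs overview below is not echoed in the output; a minimal sketch, assuming the human data frame loaded above (and, as my own choice for readability, leaving out the two factor columns from the plot), would be:
summary(human)
# graphical overview of the pairwise relationships, excluding the factor columns
ggpairs(dplyr::select(human, -Country, -GNI))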
## Country lifeExp GNI educationExp
## Afghanistan: 1 Min. :49.00 1,123 : 1 Min. : 5.40
## Albania : 1 1st Qu.:66.30 1,228 : 1 1st Qu.:11.25
## Algeria : 1 Median :74.20 1,428 : 1 Median :13.50
## Argentina : 1 Mean :71.65 1,458 : 1 Mean :13.18
## Armenia : 1 3rd Qu.:77.25 1,507 : 1 3rd Qu.:15.20
## Australia : 1 Max. :83.50 1,583 : 1 Max. :20.20
## (Other) :149 (Other):149
## birthRate mortality repr.parliament eduRatio
## Min. : 0.60 Min. : 1.0 Min. : 0.00 Min. :0.1717
## 1st Qu.: 12.65 1st Qu.: 11.5 1st Qu.:12.40 1st Qu.:0.7264
## Median : 33.60 Median : 49.0 Median :19.30 Median :0.9375
## Mean : 47.16 Mean : 149.1 Mean :20.91 Mean :0.8529
## 3rd Qu.: 71.95 3rd Qu.: 190.0 3rd Qu.:27.95 3rd Qu.:0.9968
## Max. :204.80 Max. :1100.0 Max. :57.50 Max. :1.4967
##
## labourRatio
## Min. :0.1857
## 1st Qu.:0.5984
## Median :0.7535
## Mean :0.7074
## 3rd Qu.:0.8535
## Max. :1.0380
##


In order to explore the variables and their relationships more closely, I removed the GNI and Country variables. The graphical overview and correlation plots below support my earlier observation: there is a positive correlation between life expectancy and expected years of schooling, and similarly between the adolescent birth rate and maternal mortality.
require(MASS)
require(dplyr)
keep <- c("lifeExp", "educationExp", "birthRate", "mortality","repr.parliament", "eduRatio", "labourRatio")
human <- dplyr::select(human, one_of(keep))
p <- ggpairs(human, mapping = aes(alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)))
p

cor_matrix <- cor(human)
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)

cor(human) %>%
corrplot()

Based on the biplot, the first principal component (PC1), which captures 54.7% of the variance, is most strongly associated with the ratio of female to male labour-force participation and the percentage of female representatives in parliament. The second component (PC2, 18.5% of the variance) is associated with the ratio of females to males with secondary education, expected years of schooling and life expectancy at birth.
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## 54.7 18.5 10.8 7.0 4.2 3.1 1.7
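The PCA code itself is not echoed above; a minimal sketch of how the variance percentages and the biplot could have been produced, assuming the seven remaining variables are standardized before prcomp(), is:
human_std <- scale(human)  # standardize, since the variables are on very different scales
pca_human <- prcomp(human_std)
# percentage of total variance captured by each principal component
pca_pr <- round(100 * summary(pca_human)$importance[2, ], digits = 1)
pca_pr
# biplot of the observations and the variable arrows on the first two components
biplot(pca_human, choices = 1:2, cex = c(0.6, 0.8), col = c("grey40", "deeppink2"))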

Lastly, I will take a look at the “tea” dataset from the FactoMineR package. Below you can see the structure and dimensions of the dataset: altogether it has 300 observations and 36 variables. Since 36 variables is a bit too much, I decided to use only a few of them for the multiple correspondence analysis (MCA). The MCA graph indicates that people who buy their tea from tea shops also tend to drink unpackaged (loose leaf) tea, whereas people who drink Earl Grey tea tend to take it with milk and sugar and outside lunch hours. Naturally, the tea drinkers who buy their tea from both ordinary stores and tea shops also use both tea bags and loose leaf tea.
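The code producing the output below is likewise not echoed; a minimal sketch, assuming the tea data shipped with FactoMineR and the six columns visible in the str() output further down, would be:
library(FactoMineR)
data("tea")
# keep only a handful of the 36 columns for the MCA
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
str(tea_time)
# multiple correspondence analysis on the reduced data
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
# variable biplot of the MCA solution
plot(mca, invisible = c("ind"), habillage = "quali")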
## [1] 300 36
## vars n mean sd median trimmed mad min max range
## breakfast* 1 300 1.52 0.50 2 1.52 0.00 1 2 1
## tea.time* 2 300 1.56 0.50 2 1.58 0.00 1 2 1
## evening* 3 300 1.66 0.48 2 1.70 0.00 1 2 1
## lunch* 4 300 1.85 0.35 2 1.94 0.00 1 2 1
## dinner* 5 300 1.93 0.26 2 2.00 0.00 1 2 1
## always* 6 300 1.66 0.48 2 1.70 0.00 1 2 1
## home* 7 300 1.03 0.17 1 1.00 0.00 1 2 1
## work* 8 300 1.29 0.45 1 1.24 0.00 1 2 1
## tearoom* 9 300 1.19 0.40 1 1.12 0.00 1 2 1
## friends* 10 300 1.35 0.48 1 1.31 0.00 1 2 1
## resto* 11 300 1.26 0.44 1 1.20 0.00 1 2 1
## pub* 12 300 1.21 0.41 1 1.14 0.00 1 2 1
## Tea* 13 300 1.86 0.58 2 1.83 0.00 1 3 2
## How* 14 300 1.62 0.92 1 1.49 0.00 1 4 3
## sugar* 15 300 1.48 0.50 1 1.48 0.00 1 2 1
## how* 16 300 1.55 0.70 1 1.44 0.00 1 3 2
## where* 17 300 1.46 0.67 1 1.32 0.00 1 3 2
## price* 18 300 3.86 2.16 5 3.95 1.48 1 6 5
## age 19 300 37.05 16.87 32 34.94 16.31 15 90 75
## sex* 20 300 1.41 0.49 1 1.38 0.00 1 2 1
## SPC* 21 300 3.63 1.95 3 3.62 2.97 1 7 6
## Sport* 22 300 1.60 0.49 2 1.62 0.00 1 2 1
## age_Q* 23 300 2.61 1.42 2 2.52 1.48 1 5 4
## frequency* 24 300 2.33 1.04 3 2.29 1.48 1 4 3
## escape.exoticism* 25 300 1.53 0.50 2 1.53 0.00 1 2 1
## spirituality* 26 300 1.31 0.46 1 1.27 0.00 1 2 1
## healthy* 27 300 1.30 0.46 1 1.25 0.00 1 2 1
## diuretic* 28 300 1.42 0.49 1 1.40 0.00 1 2 1
## friendliness* 29 300 1.19 0.40 1 1.12 0.00 1 2 1
## iron.absorption* 30 300 1.90 0.30 2 2.00 0.00 1 2 1
## feminine* 31 300 1.57 0.50 2 1.59 0.00 1 2 1
## sophisticated* 32 300 1.72 0.45 2 1.77 0.00 1 2 1
## slimming* 33 300 1.15 0.36 1 1.06 0.00 1 2 1
## exciting* 34 300 1.61 0.49 2 1.64 0.00 1 2 1
## relaxing* 35 300 1.62 0.49 2 1.65 0.00 1 2 1
## effect.on.health* 36 300 1.78 0.41 2 1.85 0.00 1 2 1
## skew kurtosis se
## breakfast* -0.08 -2.00 0.03
## tea.time* -0.25 -1.94 0.03
## evening* -0.66 -1.57 0.03
## lunch* -1.99 1.96 0.02
## dinner* -3.35 9.28 0.01
## always* -0.66 -1.57 0.03
## home* 5.48 28.16 0.01
## work* 0.92 -1.16 0.03
## tearoom* 1.55 0.39 0.02
## friends* 0.64 -1.59 0.03
## resto* 1.07 -0.86 0.03
## pub* 1.42 0.01 0.02
## Tea* 0.02 -0.21 0.03
## How* 1.05 -0.41 0.05
## sugar* 0.07 -2.00 0.03
## how* 0.86 -0.53 0.04
## where* 1.14 0.03 0.04
## price* -0.36 -1.65 0.12
## age 0.88 -0.10 0.97
## sex* 0.38 -1.86 0.03
## SPC* 0.09 -1.39 0.11
## Sport* -0.39 -1.85 0.03
## age_Q* 0.32 -1.30 0.08
## frequency* -0.09 -1.34 0.06
## escape.exoticism* -0.11 -2.00 0.03
## spirituality* 0.80 -1.36 0.03
## healthy* 0.87 -1.25 0.03
## diuretic* 0.32 -1.90 0.03
## friendliness* 1.55 0.39 0.02
## iron.absorption* -2.59 4.74 0.02
## feminine* -0.28 -1.93 0.03
## sophisticated* -0.96 -1.09 0.03
## slimming* 1.95 1.81 0.02
## exciting* -0.46 -1.79 0.03
## relaxing* -0.51 -1.75 0.03
## effect.on.health* -1.35 -0.19 0.02
## Warning: attributes are not identical across measure variables; they will
## be dropped

## 'data.frame': 300 obs. of 6 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## vars n mean sd median trimmed mad min max range skew kurtosis
## Tea* 1 300 1.86 0.58 2 1.83 0 1 3 2 0.02 -0.21
## How* 2 300 1.62 0.92 1 1.49 0 1 4 3 1.05 -0.41
## how* 3 300 1.55 0.70 1 1.44 0 1 3 2 0.86 -0.53
## sugar* 4 300 1.48 0.50 1 1.48 0 1 2 1 0.07 -2.00
## where* 5 300 1.46 0.67 1 1.32 0 1 3 2 1.14 0.03
## lunch* 6 300 1.85 0.35 2 1.94 0 1 2 1 -1.99 1.96
## se
## Tea* 0.03
## How* 0.05
## how* 0.04
## sugar* 0.03
## where* 0.04
## lunch* 0.02
## Warning: attributes are not identical across measure variables; they will
## be dropped

##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.279 0.261 0.219 0.189 0.177 0.156
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.144 0.141 0.117 0.087 0.062
## % of var. 7.841 7.705 6.392 4.724 3.385
## Cumulative % of var. 77.794 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898
## cos2 v.test Dim.3 ctr cos2 v.test
## black 0.003 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 0.027 2.867 | 0.433 9.160 0.338 10.053 |
## green 0.107 -5.669 | -0.108 0.098 0.001 -0.659 |
## alone 0.127 -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 0.035 3.226 | 1.329 14.771 0.218 8.081 |
## milk 0.020 2.422 | 0.013 0.003 0.000 0.116 |
## other 0.102 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag 0.161 -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 0.478 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged 0.141 -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
